
    Online Learning Discriminative Dictionary with Label Information for Robust Object Tracking

    A supervised approach to online learning of a structured, sparse, and discriminative representation for object tracking is presented. Label information from the training data is incorporated into the dictionary learning process to construct a robust and discriminative dictionary. This is accomplished by adding an ideal-code regularization term and a classification error term to the total objective function. By minimizing this objective, we jointly learn a high-quality dictionary and an optimal linear multi-class classifier using an iteratively reweighted least squares (IRLS) algorithm. Combined with robust sparse coding, the learned classifier is applied directly to separate the object from the background. As tracking proceeds, the proposed algorithm alternates between robust sparse coding and dictionary updating. Experimental evaluations on challenging sequences show that the proposed algorithm performs favorably against state-of-the-art methods in terms of effectiveness, accuracy, and robustness.
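
    The abstract does not state the objective explicitly; as a minimal sketch, assuming an LC-KSVD-style formulation (the symbols, trade-off weights, and exact term layout are assumptions, not the paper's notation), the total objective could take the form

        \min_{D,\,W,\,S}\; \|X - DS\|_F^2
            + \lambda_1 \|Q - S\|_F^2      % ideal-code regularization
            + \lambda_2 \|H - WS\|_F^2     % classification error
            + \gamma \|S\|_1               % sparsity of the codes

    where X holds the training features, D is the dictionary, S the sparse codes, Q the ideal discriminative codes (nonzero exactly where an atom's label matches a sample's label), H the label matrix, and W the linear classifier. An IRLS solver would alternate weighted least-squares updates of D, W, and S, with the reweighting downweighting large-residual samples, consistent with the robust sparse coding the abstract describes.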

    Online Learning a High-Quality Dictionary and Classifier Jointly for Multitask Object Tracking


    Robust and accurate online pose estimation algorithm via efficient three‐dimensional collinearity model

    In this study, the authors propose a robust and highly accurate pose estimation algorithm that solves the perspective-n-point problem in real time. The algorithm does away with the distinction between coplanar and non-coplanar point configurations and provides a unified formulation for both. Based on the inverse projection ray, an efficient object-space collinearity model is proposed as the cost function. The principal depth and the relative depth of the reference points are introduced to remove the residual error of the cost function and to improve the robustness and accuracy of the authors' pose estimation method. The pose and the depths of the points are solved iteratively by minimising the cost function, and the point coordinates are then reconstructed in the camera coordinate system. Next, the optimal absolute orientation solution gives the relative pose between the estimated three-dimensional (3D) point set and the 3D model point set. This two-step procedure is repeated until the result converges. Experimental results on simulated and real data show the superior performance of the proposed algorithm: its accuracy is higher than that of state-of-the-art algorithms, and among the tested algorithms it has the best noise resistance and the least deviation under the influence of outliers.
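
    As a rough illustration of the two-step iteration the abstract describes, the sketch below alternates between (1) reconstructing each reference point on its inverse projection ray at the depth implied by the current pose and (2) solving the optimal absolute orientation between the model points and the reconstructed points. This is a minimal sketch in the spirit of object-space collinearity methods, not the authors' exact algorithm; the initial guess, convergence test, and function names are assumptions.

        # Minimal iterative object-space pose estimation sketch (assumed
        # structure, not the authors' exact formulation).
        import numpy as np

        def absolute_orientation(P, Q):
            """Least-squares rigid transform (R, t) with R @ P_i + t ~ Q_i,
            via the SVD-based Kabsch/Umeyama method."""
            cP, cQ = P.mean(axis=0), Q.mean(axis=0)
            H = (P - cP).T @ (Q - cQ)
            U, _, Vt = np.linalg.svd(H)
            d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
            R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
            return R, cQ - R @ cP

        def estimate_pose(model_pts, image_pts, K, n_iter=100, tol=1e-10):
            """model_pts: (N, 3) object points; image_pts: (N, 2) pixels;
            K: 3x3 camera intrinsics. Handles coplanar and general points."""
            # Unit inverse-projection rays through each image point.
            rays = np.linalg.solve(
                K, np.column_stack([image_pts, np.ones(len(image_pts))]).T).T
            rays /= np.linalg.norm(rays, axis=1, keepdims=True)
            R, t = np.eye(3), np.array([0.0, 0.0, 5.0])  # crude initial guess
            prev_err = np.inf
            for _ in range(n_iter):
                cam_pts = model_pts @ R.T + t
                # Depth of each point = orthogonal projection onto its ray.
                depths = np.einsum('ij,ij->i', cam_pts, rays)
                recon = rays * depths[:, None]   # points forced onto the rays
                R, t = absolute_orientation(model_pts, recon)
                err = np.sum((model_pts @ R.T + t - recon) ** 2)
                if abs(prev_err - err) < tol:
                    break
                prev_err = err
            return R, t

    Calling estimate_pose(P, uv, K) with a handful of well-spread points returns a rotation and translation; the authors' algorithm additionally models the principal and relative depths to suppress the residual error of this kind of cost function.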

    Novel event analysis for human-machine collaborative underwater exploration

    One of the main tasks of a deep-sea submersible is human-machine collaborative scientific exploration: human operators drive the submersible and monitor the cameras mounted around it to spot new fish species or unusual topography, which is a tedious process. In this paper, by defining novel marine animals or any extreme occurrences as novel events, we design a new deep-sea novel visual event analysis framework that improves both the efficiency and the accuracy of human-machine collaboration. Specifically, our visual framework covers more diverse functions than most state-of-the-art methods, including novel event detection, tracking, and summarization. Because of the power and computational resource limitations of the submersible, we design an efficient deep-learning-based visual saliency method for novel event detection and propose an online object tracking strategy as well. All experiments are conducted on Jiaolong, the Chinese manned deep-sea submersible, which carries several pan-tilt-zoom (PTZ) cameras and static cameras. We build a new deep-sea novel event dataset, and the results confirm that our human-machine collaborative visual observation framework can automatically detect, track, and summarize novel deep-sea events.
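
    As a loose illustration of a lightweight saliency gate of the kind the abstract mentions (the actual method is deep-learning based; this sketch substitutes the classical spectral-residual saliency of Hou and Zhang, and the threshold and filter sizes are assumptions):

        # Spectral-residual saliency gate for flagging candidate novel
        # frames; a stand-in sketch, not the paper's deep saliency model.
        import numpy as np
        from scipy.ndimage import uniform_filter

        def spectral_residual_saliency(gray):
            """Saliency map of a 2D grayscale frame (Hou & Zhang, 2007).
            The original method runs on a heavily downscaled frame."""
            F = np.fft.fft2(gray)
            log_amp = np.log1p(np.abs(F))        # log1p avoids log(0)
            phase = np.angle(F)
            # Spectral residual = log amplitude minus its local average.
            residual = log_amp - uniform_filter(log_amp, size=3)
            sal = np.abs(np.fft.ifft2(np.exp(residual + 1j * phase))) ** 2
            sal = uniform_filter(sal, size=9)    # smooth the squared map
            return sal / sal.max()

        def is_novel(frame_gray, threshold=0.15):
            """Flag a frame whose mean saliency exceeds a chosen threshold;
            flagged frames would seed the online tracker and the summary."""
            return spectral_residual_saliency(frame_gray).mean() > threshold

    In a pipeline like the one described, such a cheap per-frame gate keeps the expensive detector and tracker idle until something unusual appears, which suits the submersible's limited power and compute budget.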